Neural posterior estimation methods for simulation-based inference can be ill-suited for handling posterior distributions obtained by conditioning on multiple observations, as they tend to require a large number of simulator calls to yield accurate approximations. Neural likelihood estimation methods can naturally handle multiple observations, but require a separate inference step, which may affect their efficiency and performance. We introduce a new method for simulation-based inference that enjoys the benefits of both approaches. We propose to model the posterior distribution induced by individual observations, and introduce a sampling algorithm that combines the learned scores to efficiently sample from the target.
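Since the individual posteriors share the same prior, their scores combine in closed form: for conditionally i.i.d. observations, p(θ|x_1:n) ∝ p(θ)^(1−n) ∏_i p(θ|x_i), so ∇log p(θ|x_1:n) = Σ_i ∇log p(θ|x_i) − (n−1)∇log p(θ). A minimal sketch of a sampler driven by such a combined score, with `posterior_score` standing in for a learned per-observation score network and unadjusted Langevin dynamics as one simple choice of sampler (the paper's actual algorithm may differ):

```python
import numpy as np

def combined_score(theta, xs, posterior_score, prior_score):
    """Score of p(theta | x_1..x_n) via
    p(theta | x_1:n) ∝ p(theta)^(1-n) * prod_i p(theta | x_i)."""
    n = len(xs)
    return (1 - n) * prior_score(theta) + sum(posterior_score(theta, x) for x in xs)

def langevin_sample(theta0, xs, posterior_score, prior_score,
                    step=1e-3, n_steps=2000, seed=0):
    """Unadjusted Langevin dynamics driven by the combined score."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_steps):
        g = combined_score(theta, xs, posterior_score, prior_score)
        theta = theta + step * g + np.sqrt(2 * step) * rng.standard_normal(theta.shape)
    return theta

# Toy check with analytic Gaussian scores: prior N(0,1), likelihood N(theta,1),
# so p(theta | x) = N(x/2, 1/2) with score -2*(theta - x/2).
xs = [1.0, 2.0, 3.0]
post_score = lambda th, x: -2.0 * (th - x / 2.0)
prior_score = lambda th: -th
print(langevin_sample(np.zeros(1), xs, post_score, prior_score))  # near 1.5
```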
Many methods exist for constructing powerful variational distributions based on unadjusted Langevin transitions. Most of these were developed using a wide range of different approaches and techniques. Unfortunately, the lack of a unified analysis and derivation makes developing new methods and reasoning about existing ones a challenging task. We address this by providing a single analysis that unifies and generalizes these existing techniques. The main idea is to augment the target and variational distributions by numerically simulating the underdamped Langevin diffusion process and its time reversal. The benefits of this approach are twofold: it provides a unified formulation for many existing methods, and it simplifies the development of new ones. In fact, using our formulation we propose a new method that combines the strengths of previously existing algorithms; it uses underdamped Langevin transitions and powerful augmentations parameterized by a score network. Our empirical evaluation shows that the proposed method consistently outperforms relevant baselines in a wide range of tasks.
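For concreteness, here is a minimal sketch of one discretized underdamped Langevin transition of the kind such methods stack into the variational flow. This uses a simple Euler discretization; the paper's exact integrator and score-network augmentation are not reproduced here.

```python
import numpy as np

_rng = np.random.default_rng(0)

def underdamped_langevin_step(x, v, grad_log_target, step=1e-2, gamma=1.0, rng=_rng):
    """One Euler step of the underdamped Langevin diffusion
    dx = v dt,  dv = (∇log π(x) - γ v) dt + sqrt(2γ) dW."""
    v = v + step * (grad_log_target(x) - gamma * v) \
        + np.sqrt(2.0 * gamma * step) * rng.standard_normal(np.shape(v))
    x = x + step * v
    return x, v

# Example: target N(0, 1), so ∇log π(x) = -x.
x, v = np.array([3.0]), np.array([0.0])
for _ in range(500):
    x, v = underdamped_langevin_step(x, v, lambda x: -x)
print(x)  # drifts toward the target's high-probability region
```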
Hierarchical models represent a challenging setting for inference algorithms. MCMC methods struggle to scale to large models with many local variables and observations, and variational inference (VI) may fail to provide accurate approximations due to the use of simple variational families. Some variational methods (e.g., importance weighted VI) integrate Monte Carlo methods to give better accuracy, but these tend to be unsuitable for hierarchical models, as they do not allow for subsampling and their performance tends to degrade in high-dimensional models. We propose a new family of variational bounds for hierarchical models, based on applying tightening methods (e.g., importance weighting) separately to each group of local random variables. We show that our approach naturally allows the use of subsampling to obtain unbiased gradients, and that it fully leverages the power of methods that build tighter lower bounds by applying them independently in lower-dimensional spaces, leading to better results than relevant baselines.
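A minimal sketch of the idea on a toy hierarchical model (all interface names are illustrative, not the paper's code): an importance-weighted bound is computed independently for each group of local variables given the globals, and a mini-batch of groups gives an unbiased estimate of their sum.

```python
import numpy as np
from types import SimpleNamespace

rng = np.random.default_rng(0)
log_n = lambda x, m, s2: -0.5 * ((x - m) ** 2 / s2 + np.log(2 * np.pi * s2))

# Toy model: z ~ N(0,1); per group i, u_i ~ N(z,1) and y_i ~ N(u_i,1).
# Guide: q(z) = N(0,1); q(u | z, y) = N((z+y)/2, 1/2).
model = SimpleNamespace(
    sample_q_global=lambda: rng.standard_normal(),
    log_p_global=lambda z: log_n(z, 0.0, 1.0),
    log_q_global=lambda z: log_n(z, 0.0, 1.0),
    sample_q_local=lambda z, y, K: (z + y) / 2 + np.sqrt(0.5) * rng.standard_normal(K),
    log_p_local=lambda y, u, z: log_n(u, z, 1.0) + log_n(y, u, 1.0),
    log_q_local=lambda u, z, y: log_n(u, (z + y) / 2, 0.5),
)

def local_iw_bound(y_i, z, model, K=8):
    """IWAE-style bound for one group of locals u_i, given global z."""
    us = model.sample_q_local(z, y_i, K)
    log_w = np.array([model.log_p_local(y_i, u, z) - model.log_q_local(u, z, y_i)
                      for u in us])
    m = log_w.max()
    return m + np.log(np.mean(np.exp(log_w - m)))      # stable log-mean-exp

def subsampled_bound(data, model, batch_idx, K=8):
    """Unbiased estimate of the full bound from a mini-batch of groups."""
    z = model.sample_q_global()
    scale = len(data) / len(batch_idx)                  # subsampling correction
    local = scale * sum(local_iw_bound(data[i], z, model, K) for i in batch_idx)
    return model.log_p_global(z) - model.log_q_global(z) + local

data = rng.standard_normal(100)                         # one observation per group
print(subsampled_bound(data, model, batch_idx=rng.choice(100, 10, replace=False)))
```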
Causal inference is essential for data-driven decision making across domains such as business engagement, medical treatment, and policy making. However, research on causal discovery has evolved separately from research on inference methods, preventing a direct combination of methods from the two fields. In this work, we develop Deep End-to-end Causal Inference (DECI), a single flow-based non-linear additive noise model that takes in observational data and can perform both causal discovery and inference, including conditional average treatment effect (CATE) estimation. We provide a theoretical guarantee that DECI can recover the ground-truth causal graph under standard causal discovery assumptions. Motivated by application impact, we extend the model to heterogeneous, mixed-type data with missing values, allowing both continuous and discrete treatment decisions. Our results show the competitive performance of DECI relative to relevant baselines for both causal discovery and (C)ATE estimation, in over a thousand experiments on synthetic datasets and causal machine learning benchmarks, across data types and levels of missingness.
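In an additive noise model, each variable equals a learned function of its parents plus independent noise, so interventional quantities can be estimated by simulation. A minimal sketch of CATE estimation under this assumption, where `f_outcome` is a hypothetical stand-in for a mechanism DECI would learn from data and Gaussian noise is assumed for simplicity:

```python
import numpy as np

def simulate_outcome(f_outcome, treatment, x, n_samples=10_000, seed=0):
    """Sample Y under do(T = treatment) at covariates x by pushing exogenous
    noise through the additive noise mechanism y = f(t, x) + z."""
    z = np.random.default_rng(seed).standard_normal(n_samples)
    return f_outcome(treatment, x) + z

def cate(f_outcome, x):
    """CATE(x) = E[Y | do(T=1), X=x] - E[Y | do(T=0), X=x]."""
    return simulate_outcome(f_outcome, 1.0, x).mean() - \
           simulate_outcome(f_outcome, 0.0, x).mean()

# Toy mechanism: the treatment effect grows with the covariate.
f = lambda t, x: 2.0 * t * x + x
print(cate(f, x=1.5))   # ≈ 3.0
```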
Recent advances in deep learning have enabled us to address the curse of dimensionality (COD) by solving problems in higher dimensions. A subset of such approaches has led to solving high-dimensional PDEs, opening the door to a variety of real-world problems ranging from mathematical finance to stochastic control for industrial applications. Although feasible, these deep learning methods are still constrained by training time and memory. Tackling these shortcomings, we demonstrate that Tensor Neural Networks (TNN) can provide significant parameter savings while attaining the same accuracy as a classical Dense Neural Network (DNN). We also show that TNN can be trained faster than DNN for the same accuracy. In addition to TNN, we introduce the Tensor Network Initializer (TNN Init), a weight initialization scheme that leads to faster convergence with smaller variance for an equivalent parameter count compared to a DNN. We benchmark TNN and TNN Init by applying them to solve the parabolic PDE associated with the Heston model, which is widely used in financial pricing theory.
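The parameter saving comes from replacing dense weight matrices with tensor-network factors. The paper's exact decomposition is not reproduced here; a rank-r matrix factorization is the simplest instance of the idea (all names illustrative):

```python
import numpy as np

class FactorizedLinear:
    """Dense layer with W replaced by low-rank factors A @ B:
    rank*(d_in + d_out) parameters instead of d_in*d_out."""

    def __init__(self, d_in, d_out, rank, rng=np.random.default_rng(0)):
        # variance-scaled initialization; a TNN-Init-style scheme would seed a
        # dense layer from such factors to control the output variance
        self.A = rng.standard_normal((d_in, rank)) / np.sqrt(d_in)
        self.B = rng.standard_normal((rank, d_out)) / np.sqrt(rank)

    def __call__(self, x):
        return x @ self.A @ self.B

layer = FactorizedLinear(d_in=512, d_out=512, rank=16)   # 16,384 vs 262,144 params
print(layer(np.ones((1, 512))).shape)                    # (1, 512)
```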
In the future, service robots are expected to operate autonomously for long periods of time without human intervention. Much work striving for this goal has emerged alongside the development of robotics, in both hardware and software. Today we believe that an important underpinning of long-term robot autonomy is the ability of robots to learn on site and on the fly, especially when they are deployed in changing environments or need to traverse different environments. In this paper, we examine the problem of long-term autonomy from the perspective of robot learning, especially in an online setting, and discuss in tandem its premise, "data", and the subsequent "deployment".
Factorization machines (FMs) are a powerful tool for regression and classification in the context of sparse observations, and have been successfully applied to collaborative filtering, especially when side information about users or items is available. Bayesian formulations of FMs have been proposed to provide confidence intervals over the model's predictions; however, they usually involve Markov chain Monte Carlo methods that require many samples to provide accurate predictions, resulting in slow training on large-scale data. In this paper, we propose a variational formulation of factorization machines that yields a simple objective that can be easily optimized using standard mini-batch stochastic gradient descent, making it amenable to large-scale data. Our algorithm learns an approximate posterior distribution over the user and item parameters, which leads to confidence intervals over the predictions. We show, using several datasets, that it has comparable or better prediction accuracy than existing methods, and we illustrate applications in active learning strategies, e.g., preference elicitation techniques.
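A minimal sketch of the ingredients, assuming a Gaussian variational posterior over the factor matrix V (mean `mu`, log-std `log_sig`) trained with the reparameterization trick; names are illustrative, not the paper's implementation:

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """FM output using the O(d k) pairwise-interaction identity:
    sum_{i<j} <v_i, v_j> x_i x_j = 0.5 * sum_f [(x@V)_f^2 - (x^2 @ V^2)_f]."""
    s = x @ V
    s2 = (x ** 2) @ (V ** 2)
    return w0 + x @ w + 0.5 * np.sum(s ** 2 - s2)

def elbo_sample(x, y, w0, w, mu, log_sig, rng=np.random.default_rng(0)):
    """One-sample ELBO: reparameterized Gaussian likelihood minus KL(q || N(0,I))."""
    V = mu + np.exp(log_sig) * rng.standard_normal(mu.shape)   # reparameterize
    log_lik = -0.5 * (y - fm_predict(x, w0, w, V)) ** 2
    kl = 0.5 * np.sum(mu ** 2 + np.exp(2 * log_sig) - 2 * log_sig - 1)
    return log_lik - kl   # maximize with mini-batch SGD over (x, y) pairs

d, k = 20, 4
rng = np.random.default_rng(0)
x = (rng.random(d) < 0.2).astype(float)            # sparse feature vector
w0, w = 0.0, np.zeros(d)
mu, log_sig = 0.01 * rng.standard_normal((d, k)), np.full((d, k), -2.0)
print(elbo_sample(x, 1.0, w0, w, mu, log_sig))
```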
By transferring knowledge from large, diverse, task-agnostic datasets, modern machine learning models can solve specific downstream tasks either zero-shot or with small task-specific datasets to a high level of performance. While this capability has been demonstrated in fields such as computer vision, natural language processing, and speech recognition, it remains to be shown in robotics, where the generalization capabilities of models are particularly critical given the difficulty of collecting real-world robotic data. We argue that one of the keys to the success of such general robotic models lies in open-ended, task-agnostic training combined with high-capacity architectures that can absorb all of the diverse robotic data. In this paper, we present a model class, dubbed Robotics Transformer, that exhibits promising scalable model properties. We verify our conclusions in a study of different model classes and their ability to generalize as a function of data size, model size, and data diversity, based on a large-scale data collection on real robots performing real-world tasks. The project's website and videos can be found at robotics-transformer.github.io
This case study investigates the extent to which a language model (GPT-2) is able to capture native speakers' intuitions about implicit causality in a sentence completion task. We first reproduce earlier results (showing lower surprisal values for pronouns that are congruent with either the subject or object, depending on which one corresponds to the implicit causality bias of the verb), and then examine the effects of gender and verb frequency on model performance. Our second study examines the reasoning ability of GPT-2: is the model able to produce more sensible motivations for why the subject VERBed the object if the verbs have stronger causality biases? We also develop a methodology to avoid human raters being biased by obscenities and disfluencies generated by the model.
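For readers who want to reproduce this kind of surprisal measurement, here is a minimal sketch using Hugging Face `transformers` GPT-2; the stimuli shown are illustrative, not the study's materials:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def surprisal(context, continuation):
    """Surprisal in bits (-log2 p) of `continuation` given `context`."""
    ctx_len = tok(context, return_tensors="pt").input_ids.shape[1]
    ids = tok(context + continuation, return_tensors="pt").input_ids
    with torch.no_grad():
        logp = model(ids).logits.log_softmax(-1)
    targets = ids[0, ctx_len:]
    preds = logp[0, ctx_len - 1 : -1]        # position i predicts token i+1
    nats = -preds[torch.arange(len(targets)), targets].sum()
    return (nats / torch.log(torch.tensor(2.0))).item()

# "fascinate" carries a subject bias, so the subject pronoun should be less
# surprising after "because" than the object pronoun.
print(surprisal("Mary fascinated John because", " she"))
print(surprisal("Mary fascinated John because", " he"))
```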
Semi-supervised anomaly detection is a common problem, as the datasets containing anomalies are often only partially labeled. We propose a canonical framework, Semi-supervised Pseudo-labeler Anomaly Detection with Ensembling (SPADE), that is not limited by the assumption that labeled and unlabeled data come from the same distribution. Indeed, this assumption is violated in many applications: the labeled data may contain only anomalies, unlike the unlabeled data; the unlabeled data may contain different types of anomalies; or the labeled data may contain only 'easy-to-label' samples. SPADE utilizes an ensemble of one-class classifiers as the pseudo-labeler to improve the robustness of pseudo-labeling under distribution mismatch. Partial matching is proposed to automatically select the critical hyper-parameters for pseudo-labeling without validation data, which is crucial when labeled data is limited. SPADE shows state-of-the-art semi-supervised anomaly detection performance across a wide range of scenarios with distribution mismatch in both tabular and image domains. In some common real-world settings, such as a model facing new types of unlabeled anomalies, SPADE outperforms the state-of-the-art alternatives by 5% AUC on average.
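A minimal sketch in the spirit of SPADE's pseudo-labeler (isolation forests stand in for the paper's one-class classifiers, and the partial matching procedure for choosing thresholds is simplified to a fixed agreement level):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def pseudo_label(X_normal, X_unlabeled, n_models=5, agree=0.8):
    """Pseudo-label unlabeled points only where an ensemble of one-class
    classifiers agrees; ambiguous points stay unlabeled (-1)."""
    votes = np.zeros(len(X_unlabeled))
    for seed in range(n_models):
        # each ensemble member sees a bootstrap of the (assumed-normal) data
        idx = np.random.default_rng(seed).integers(0, len(X_normal), len(X_normal))
        clf = IsolationForest(random_state=seed).fit(X_normal[idx])
        votes += (clf.predict(X_unlabeled) == -1)      # -1 means "anomaly"
    frac = votes / n_models
    labels = np.full(len(X_unlabeled), -1)             # -1: leave unlabeled
    labels[frac >= agree] = 1                          # confident anomaly
    labels[frac <= 1 - agree] = 0                      # confident normal
    return labels

rng = np.random.default_rng(42)
X_normal = rng.standard_normal((500, 4))
X_unlab = np.vstack([rng.standard_normal((50, 4)),
                     rng.standard_normal((10, 4)) + 4.0])   # shifted anomalies
print(np.bincount(pseudo_label(X_normal, X_unlab) + 1))     # [unlabeled, 0, 1]
```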